149 research outputs found

    A study of cyber hate on Twitter with implications for social media governance strategies

    This paper explores ways in which the harmful effects of cyber hate may be mitigated through mechanisms for enhancing the self-governance of new digital spaces. We report findings from a mixed-methods study of responses to cyber hate posts, which aimed to: (i) understand how people interact in this context by undertaking qualitative interaction analysis and developing a statistical model to explain the volume of responses to cyber hate posted to Twitter, and (ii) explore the use of machine learning techniques to assist in identifying cyber hate counter-speech.
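    The abstract does not specify which machine learning techniques the authors used, so the following is only an illustrative sketch of the general idea of classifying replies as counter-speech: a toy multinomial Naive Bayes over bag-of-words features, with hypothetical labelled examples. The class names, example replies, and model choice are all assumptions, not the paper's method.

    ```python
    # Illustrative sketch only: a toy bag-of-words Naive Bayes for
    # separating counter-speech replies from other replies. The labelled
    # examples and the model choice are hypothetical stand-ins; the paper
    # does not disclose its actual features or classifier here.
    import math
    from collections import Counter

    def tokenize(text):
        return text.lower().split()

    class CounterSpeechNB:
        """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

        def fit(self, texts, labels):
            self.classes = set(labels)
            self.class_counts = Counter(labels)
            self.word_counts = {c: Counter() for c in self.classes}
            self.vocab = set()
            for text, label in zip(texts, labels):
                toks = tokenize(text)
                self.word_counts[label].update(toks)
                self.vocab.update(toks)
            return self

        def predict(self, text):
            toks = tokenize(text)
            total = sum(self.class_counts.values())
            best, best_lp = None, -math.inf
            for c in self.classes:
                # log prior + sum of smoothed log likelihoods
                lp = math.log(self.class_counts[c] / total)
                denom = sum(self.word_counts[c].values()) + len(self.vocab)
                for t in toks:
                    lp += math.log((self.word_counts[c][t] + 1) / denom)
                if lp > best_lp:
                    best, best_lp = c, lp
            return best

    # Hypothetical labelled replies to a hateful post.
    replies = [
        "this hateful abuse is wrong and unacceptable",
        "please stop this abuse it is wrong",
        "totally agree they deserve it",
        "yes they deserve all of it",
    ]
    labels = ["counter", "counter", "other", "other"]

    clf = CounterSpeechNB().fit(replies, labels)
    assert clf.predict("this abuse is wrong") == "counter"
    assert clf.predict("they deserve it totally") == "other"
    ```

    In practice a study like this would use far richer features (and far more data) than a toy bag-of-words model; the sketch only shows the supervised-classification framing that "identifying counter-speech" implies.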

    From computer ethics to responsible research and innovation in ICT: The transition of reference discourses informing ethics-related research in information systems

    The discourse concerning computer ethics qualifies as a reference discourse for ethics-related IS research. Theories, topics and approaches of computer ethics are reflected in IS. The paper argues that there is currently a broader development in the area of research governance, which is referred to as ‘responsible research and innovation’ (RRI). RRI applied to information and communication technology (ICT) addresses some of the limitations of computer ethics and points toward a broader approach to the governance of science, technology and innovation. Taking this development into account will help IS increase its relevance and make optimal use of its established strengths.

    Refining Vision Videos

    [Context and motivation] Complex software-based systems involve several stakeholders, their activities and their interactions with the system. Vision videos are used during the early phases of a project to complement textual representations. They visualize previously abstract visions of the product and its use. By creating, elaborating, and discussing vision videos, stakeholders and developers gain an improved shared understanding of how those abstract visions could translate into concrete scenarios and requirements to which individuals can relate. [Question/problem] In this paper, we investigate two aspects of refining vision videos: (1) refining the vision by providing alternative answers to previously open issues about the system to be built, and (2) a refined understanding of the camera perspective in vision videos. The impact of using a subjective (or "ego") perspective is compared to the usual third-person perspective. [Methodology] We use shopping in rural areas as a real-world application domain for refining vision videos. Both aspects of refining vision videos were investigated in an experiment with 20 participants. [Contribution] Subjects made a significant number of additional contributions when they received both video and text rather than only one, even with very short texts and short video clips. Subjective video elements were rated as positive. However, there was no significant preference for either subjective or non-subjective videos in general.
    Comment: 15 pages; 25th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ) 2019.

    Distilling Privacy Requirements for Mobile Applications

    As mobile computing applications have become commonplace, it is increasingly important for them to address end-users’ privacy requirements. Privacy requirements depend on a number of contextual socio-cultural factors, to which mobility adds another level of contextual variation. However, traditional requirements elicitation methods do not sufficiently account for contextual factors and therefore cannot be used effectively to represent and analyse the privacy requirements of mobile end users. On the other hand, methods that do investigate contextual factors tend to produce data that does not lend itself to the process of requirements extraction. To address this problem, we have developed a Privacy Requirements Distillation approach that employs a problem analysis framework to extract and refine privacy requirements for mobile applications from raw data gathered through empirical studies involving end users. Our approach introduces privacy facets that capture patterns of privacy concerns which are matched against the raw data. We demonstrate and evaluate our approach using qualitative data from an empirical study of a mobile social networking application.
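    The idea of matching facet patterns against raw empirical data can be pictured with a minimal sketch. The facet names, keyword patterns, and study excerpts below are hypothetical stand-ins, not the authors' actual framework, which the abstract does not detail.

    ```python
    # Illustrative sketch only: tag raw study excerpts with hypothetical
    # "privacy facets" whose indicative keyword patterns they match.
    # Facets and patterns here are invented for illustration.
    import re

    FACETS = {
        "information flow": [r"\bshare(s|d)?\b", r"\bsee(s)?\s+my\b"],
        "location": [r"\blocation\b", r"\bwhere\s+i\s+am\b"],
        "identity": [r"\breal\s+name\b", r"\banonymous\b"],
    }

    def distill(excerpts):
        """Pair each raw-data excerpt with the facets whose patterns match."""
        tagged = []
        for excerpt in excerpts:
            hits = [facet for facet, patterns in FACETS.items()
                    if any(re.search(p, excerpt.lower()) for p in patterns)]
            tagged.append((excerpt, hits))
        return tagged

    # Hypothetical excerpts from an end-user study of a mobile social app.
    excerpts = [
        "I don't want the app to share my location with strangers",
        "I'd rather stay anonymous than use my real name",
    ]
    tagged = distill(excerpts)
    assert tagged[0][1] == ["information flow", "location"]
    assert tagged[1][1] == ["identity"]
    ```

    A real distillation process would rely on a structured problem analysis rather than keyword matching; the sketch only conveys how facets can act as reusable concern patterns applied to raw qualitative data.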

    Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key principles of AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.

    Six Human-Centered Artificial Intelligence Grand Challenges

    Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhancing of the human condition. These grand challenges are the result of an international collaboration across academia, industry and government, and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies.

    Systems thinking and efficiency under emissions constraints: Addressing rebound effects in digital innovation and policy

    Innovations and efficiencies in digital technology have lately been depicted as paramount in the green transition, enabling the reduction of greenhouse gas emissions both in the information and communication technology (ICT) sector and in the wider economy. This, however, fails to adequately account for rebound effects that can offset emission savings and, in the worst case, increase emissions. In this perspective, we draw on a transdisciplinary workshop with 19 experts from carbon accounting, digital sustainability research, ethics, sociology, public policy, and sustainable business to expose the challenges of addressing rebound effects in digital innovation processes and associated policy. We utilize a responsible innovation approach to uncover potential ways forward for incorporating rebound effects in these domains, concluding that addressing ICT-related rebound effects ultimately requires a shift from an ICT efficiency-centered perspective to a “systems thinking” model, which understands efficiency as one solution among others and requires constraints on emissions for ICT environmental savings to be realized.